Fault Pruning: Robust Training of Neural Networks with Memristive Weights

Abstract

Neural networks with memristive memory for weights have been proposed as an energy-efficient solution for scaling up neural network implementations. However, training such networks is still challenging due to various memristor imperfections and faulty memristive elements. Such faults are becoming increasingly severe as the density of memristor arrays increases in order to scale up weight memory. We propose fault pruning, a robust training scheme based on the idea of identifying faulty memristor behavior on the fly during training and pruning the corresponding connections. We test this algorithm in simulations using both feed-forward and convolutional architectures on standard object recognition data sets. We show its ability to mitigate the detrimental effect of memristor faults on training.
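The core idea, detecting faulty memristor behavior during training and pruning the affected connections, can be sketched in NumPy. The stuck-at fault model, the deviation threshold, the fault location, and all variable names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(0.0, 0.1, size=(4, 4))   # programmed target weights
mask = np.ones_like(W)                  # 1 = active connection, 0 = pruned

def read_back(W):
    """Simulate reading weights from a memristor array:
    one device is stuck and ignores the programmed value."""
    W_dev = W.copy()
    W_dev[1, 2] = 0.8                   # assumed stuck-at fault
    return W_dev

# On-the-fly fault detection: a device whose read-back value deviates
# strongly from the programmed target is flagged as faulty and pruned.
THRESHOLD = 0.3                         # assumed detection threshold
deviation = np.abs(read_back(W) - W)
mask[deviation > THRESHOLD] = 0.0

W_effective = W * mask                  # pruned connection carries no signal
```

In an actual training loop the mask would be applied at every weight update, so gradient descent learns around the pruned devices instead of fighting them.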


Related Articles

BinaryConnect: Training Deep Neural Networks with binary weights during propagations

Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications...


Training Neural Networks with 3–bit Integer Weights

In this work we present neural network training algorithms, which are based on the differential evolution (DE) strategies introduced by Storn and Price [Journal of Global Optimization. 11:341–359, 1997]. These strategies are applied to train neural networks with 3–bit integer weights. Integer-weight neural networks are better suited for hardware implementation than their real-weight analogues. ...
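The DE strategy this excerpt refers to can be illustrated with a single mutation-and-crossover step on integer weight vectors. The population size, scale factor `F`, crossover rate `CR`, and the 3-bit range [-4, 3] are assumptions for the sketch, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

F, CR = 0.8, 0.9                 # assumed DE scale factor and crossover rate
pop = rng.integers(-4, 4, size=(10, 6)).astype(float)  # 10 candidate weight vectors

i = 0                            # index of the target vector
a, b, c = pop[[1, 2, 3]]         # three other candidates (DE/rand/1)
mutant = a + F * (b - c)         # differential mutation
cross = rng.random(6) < CR       # binomial crossover mask
trial = np.where(cross, mutant, pop[i])
trial = np.clip(np.round(trial), -4, 3)  # re-quantize to the 3-bit integer range
```

The trial vector would then replace the target only if it yields a lower network error, which is the DE selection step.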


Constructive Training Methods for Feedforward Neural Networks with Binary Weights

Quantization of the parameters of a Perceptron is a central problem in hardware implementation of neural networks using a numerical technology. A neural model with each weight limited to a small integer range will require little surface of silicon. Moreover, according to Occam's razor principle, better generalization abilities can be expected from a simpler computational model. The price to pay...


Robust Fault Detection on Boiler-turbine Unit Actuators Using Dynamic Neural Networks

Due to the important role of the boiler-turbine units in industries and electricity generation, it is important to diagnose different types of faults in different parts of boiler-turbine system. Different parts of a boiler-turbine system like the sensor or actuator or plant can be affected by various types of faults. In this paper, the effects of the occurrence of faults on the actuators are in...


Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations

We introduce a method to train Quantized Neural Networks (QNNs) — neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operati...
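The 1-bit quantization step at the heart of such methods can be sketched as follows; sign-based binarization is a common convention assumed here, and the variable names are illustrative rather than taken from the paper:

```python
import numpy as np

# Real-valued "shadow" weights are kept for gradient accumulation;
# a sign-binarized copy is used in the forward and backward passes.
w_real = np.array([0.7, -0.2, 0.05, -1.3])

def binarize(w):
    """1-bit quantization: +1 for non-negative values, -1 for negative."""
    return np.where(w >= 0, 1.0, -1.0)

w_bin = binarize(w_real)   # [ 1., -1.,  1., -1.]
```

Because every weight is ±1, multiply-accumulate operations in the forward pass reduce to sign flips and additions, which is what enables the bit-wise implementations the excerpt mentions.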



Journal

Journal title: Lecture Notes in Computer Science

Year: 2023

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-031-34034-5_9